Search Results

Search found 30514 results on 1221 pages for '10 04'.

Page 710 of 1221

  • media player or dj software

    - by Dale
    I've been searching for quite some time for a player that will cross-fade correctly. What I mean by that: most players can start fading with a given amount of time left in the song (e.g. 10 seconds). That can be fine at times, but is there software, or a plugin for some software, that can tell the difference between a song that fades out and a song that has a cold ending? The best one I have tested so far is PCDJ, but I am sure there has to be something that can distinguish between song endings. I should add that this is for Windows, running Vista. Thanks in advance.

    Read the article

  • Fedora14 serial console how-to needed

    - by lamba2
    Has anyone ever got a serial console working in Fedora 14? Is it as simple as adding "serial --unit=0 --speed=38400" and "terminal --timeout=10 serial console" to grub, and adding "console=tty0 console=ttyS0,38400" to the kernel lines? If so, it isn't working for me. I have agetty installed and I'm using minicom, although I've heard you can also use "screen /dev/ttyUSB0" on the client side. The /etc/init/serial.conf file suggests it should be working, but nothing. I'm getting no joy from any of this after two days. Does anyone know a method that definitely works on Fedora 14 (no /etc/event.d/ or such needed)? Edit: on the client side I'm using a null modem cable and a USB-serial adaptor.
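
    For what it's worth, a minimal sketch of the pieces usually involved on a grub-legacy/upstart system like Fedora 14 (the unit, speed and placeholder kernel/root paths below are assumptions to adapt):

      # /boot/grub/grub.conf
      serial --unit=0 --speed=38400
      terminal --timeout=10 serial console

      # appended to the kernel line of the boot entry
      kernel /vmlinuz-<your kernel> ro root=<your root> console=tty0 console=ttyS0,38400

      # allow root logins over the serial line; Fedora 14's upstart job
      # /etc/init/serial.conf should then spawn agetty on ttyS0 by itself
      echo ttyS0 >> /etc/securetty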

    Read the article

  • Memory cache Ubuntu 9.10 server x86 doesn't work as expected

    - by Matthijs
    We're using an Ubuntu 9.10 server to transfer Ghost image files. It is configured only with Samba, and the DOS clients connect to the Samba share. The latest updates are installed, and so far the server runs fine. When we image 10 PCs with the same image, consisting of two 2 GB files, there's no disk activity; everything is served from RAM. There's 4 GB of RAM in the server. But when we use 2 PCs with 2 different images of eight 500 MB files each, there's a lot of continuous disk activity and the speed is lower. So it seems that Ubuntu doesn't cache more than one big file. Are there settings to change this behaviour?
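
    A quick sketch of how to confirm what is actually being cached, and which knobs exist (note that two distinct 4 GB image sets simply cannot both stay resident in 4 GB of RAM, so some re-reading from disk is expected; device names and values are examples only):

      free -m                            # the "cached" figure is how much file data sits in RAM
      vmstat 1                           # bi/bo show real disk reads/writes while clients pull images
      sysctl vm.swappiness vm.vfs_cache_pressure   # reclaim-behaviour knobs (defaults 60 / 100)
      blockdev --getra /dev/sda          # current readahead, in 512-byte sectors
      blockdev --setra 16384 /dev/sda    # larger readahead can help big sequential reads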

    Read the article

  • What metric captures why my OSX machine is so slow during XCode indexing

    - by Ben Flynn
    My entire OS X Lion machine slows down while Xcode 4.4 is indexing. The CPU is less than 10% busy, I've got over 500 MB of free memory, plenty of disk space, the disk I/O rate is not high, and network activity is not high. Indexing just a few files can take minutes and builds are extremely slow. While this is going on, even loading a new web page in Chrome can be slow. Knowing how to fix it would be great, but more fundamentally, how can I measure what is actually going slowly? What metrics should I be looking at? Nothing in Activity Monitor, iostat, top, or sar betrays anything about what's going on. Even getting a man page is interminable.
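
    A few OS X tools that expose latency and blocking rather than raw throughput may be worth a look (the process filter and intervals are just examples):

      iostat -w 1                        # per-second disk transfers; a slow-but-busy disk shows up here
      vm_stat 1                          # climbing pageouts would mean 500 MB "free" is misleading
      sudo fs_usage -w -f filesys Xcode  # every file-system call Xcode makes, with elapsed times
      sudo spindump                      # samples all processes and reports what they are blocked on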

    Read the article

  • modsecurity apache mod-security.conf missing

    - by TechMedicNYC
    Greetings, Serverfaultians. I'm not a server guy, as you can see from my noob score of 1 point, but maybe those more versed can help me. I'm using Ubuntu 13.10 32-bit Server with Apache2 2.4.6, and I'm trying to set up and configure modsecurity and modevasive on an internet-exposed production/test server. I am trying to follow this tutorial: http://www.thefanclub.co.za/how-to/how-install-apache2-modsecurity-and-modevasive-ubuntu-1204-lts-server. But at step 3, "Now add these rules to Apache2. Open a terminal window and enter: sudo vi /etc/apache2/mods-available/mod-security.conf", that file does not exist. Any suggestions?
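
    One thing worth checking: more recent libapache2-modsecurity packages name the Apache stub security2.conf and keep the rule configuration under /etc/modsecurity/, so the tutorial's path may simply be outdated for 13.10. A hedged way to see what your package actually installed:

      dpkg -L libapache2-modsecurity | grep '\.conf'
      ls /etc/apache2/mods-available/ | grep -i security
      # if the recommended config is present, activate it the usual way:
      sudo cp /etc/modsecurity/modsecurity.conf-recommended /etc/modsecurity/modsecurity.conf
      sudo a2enmod security2 && sudo service apache2 restart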

    Read the article

  • ntpstat response fine but server time out of sync

    - by zedoo
    Hi, I found out that the ntpd service I set up a few weeks ago on a CentOS 5 machine doesn't correctly synchronize the server time. I detected an offset of more than 5 minutes (by stopping ntpd and executing ntpdate). After setting up the service I checked it via ntpstat:

      [xxxx@xxx ~]$ ntpstat -q
      synchronised to local net at stratum 11
         time correct to within 10 ms
         polling server every 1024 s

    I repeated this check every day and it always showed this output. Doesn't this output tell me that the server time is sane?
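
    "Synchronised to local net" at stratum 11 usually means ntpd is disciplining itself against its own local clock driver rather than any real upstream server, so that output says nothing about the time being correct. A quick way to check which peers are actually in use (the server name shown is just an example):

      ntpq -p
      # a healthy setup shows an internet server selected with '*', for example:
      #      remote           refid      st t when poll reach   delay   offset  jitter
      # *ntp1.example.org .GPS.            1 u   33   64  377   12.345    0.421   0.113
      # if LOCAL(0) is the only peer marked '*', only the local clock is being used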

    Read the article

  • Cloud Computing - Multiple Physical Computers, One Logical Computer

    - by Koobz
    I know that you can set up multiple virtual machines per physical computer. I'm wondering if it's possible to make multiple physical computers behave as one logical unit. Fundamentally, the way I imagine it working is that you can throw 10 computers into a facility one day. You've got one client that requires the equivalent of two computers' worth, and 100 others that eat up the remaining eight. As demands change, you're just reallocating logical resources; maybe the two-computer client now requires a third physical system. You just add it to the cloud and don't worry about sharding the database or migrating data over to a new server. Can it work this way? If yes, why would anyone ever do things like partition their database servers anymore? Just add more computing resources. You scale horizontally with the hardware, but your server appears to scale vertically. There's no need to modify your application's infrastructure to support multiple databases, etc.

    Read the article

  • reduce timeout when connecting to wrong IP (XP-XP, windows explorer)

    - by Viki
    I have many shortcuts of the form \\10.0.0.123\path in Windows Explorer (XP). Some of the IPs are sometimes dead (VMware machines that are inactive). The problem is that when I try to open Properties on such a shortcut (to correct the IP, or to delete it), Windows Explorer freezes for minutes, a very long time. The Start menu freezes too. This is very inconvenient. How can I reduce the Windows Explorer timeout when it is probing the connection to another XP share?

    Read the article

  • video streaming infrastructure advice

    - by Alchemical
    We would like to set up a live video-chat web site and are looking for basic recommendations for software and hardware. Most streams will be broadcast live by a single person with a web cam, etc., and viewed by typically 1-10 people, although there could be 100+ viewers on the high side. Audio and video do not have to be super high quality, but do need to be "good enough"; the main point is to convey the basic information in the video (and audio). If the frame rate occasionally drops low and then returns to normal fairly soon, we could live with that. Budget is an issue, so in general we are looking for a lower-cost solution that gives us most of what we need in terms of performance and quality. We are looking at Peer1 for co-lo. The rest of our web site will be on the .NET / Windows platform. We are open to any platform for the streaming solution, although our technical expertise is currently more on the Windows side.

    Read the article

  • 10,000 RPM HDD (WD VelociRaptor) vs SSD for OS?

    - by GiH
    I currently have a 10,000 RPM 150 GB Raptor that I use for Vista. I'm about to upgrade to Windows 7, and while doing that I thought I'd buy another drive and install Ubuntu 9.10 on it. I don't want to partition my current drive, but I don't need 150 GB for another OS. So I'm having trouble deciding: is it worth buying a 64 GB SSD at the same price point as the 150 GB WD VelociRaptor? Or should I just get a 7,200 RPM drive for really cheap (around $50)? Would it be better to use an SSD for the OS than a mechanical drive? I could always get a 32 GB SSD too... Oh, and I don't want to virtualize Ubuntu, because I'm going to be testing the differences in networking and overall performance.

    Read the article

  • LM Sensors always returning same (invalid) value for one temp sensor

    - by pkaeding
    I am trying to monitor the temp sensors on a server and plot them using Cacti. I have lm-sensors installed and working correctly. For example, here is the output from sensors:

      % sensors
      acpitz-virtual-0
      Adapter: Virtual device
      temp1:       +26.8 C  (crit = +100.0 C)
      temp2:       +32.0 C  (crit = +60.0 C)

      coretemp-isa-0000
      Adapter: ISA adapter
      Core 0:      +36.0 C  (high = +105.0 C, crit = +105.0 C)

      coretemp-isa-0001
      Adapter: ISA adapter
      Core 1:      +42.0 C  (high = +105.0 C, crit = +105.0 C)

    However, when I try to get this data via SNMP, I get only one sensor's temperature correctly, and another one always returns 100.000 C:

      % snmpwalk -Os -c public -v 1 10.8.0.18 -m ALL lmTempSensors
      lmTempSensorsIndex.1 = INTEGER: 0
      lmTempSensorsIndex.2 = INTEGER: 1
      lmTempSensorsDevice.1 = STRING: temp1
      lmTempSensorsDevice.2 = STRING: temp1
      lmTempSensorsValue.1 = Gauge32: 26800
      lmTempSensorsValue.2 = Gauge32: 100000

    So my question is two-fold: Why is the second sensor returned by SNMP giving a value of 100 C (when it should be 32 C)? Why are my CPU core sensors not being returned by SNMP?

    Read the article

  • $PATH in Vim doesn't match Terminal

    - by donut
    I'm using MacVim, and when I don't launch it from the Terminal (with mvim), its $PATH does not include what I have set in my .bash_profile; it only seems to have the default values, /usr/bin:/bin:/usr/sbin:/sbin. I'm running OS X 10.5.8. Even if I could set it manually in my .vimrc that would be okay, though I would prefer it to pull from the same place as Terminal. I've tried following what one site suggested, adding "let $PATH += /blah/foo:/bar/etc", to no avail. Edit/Solution: see my answer below; MacVim has an option to fix this.
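
    For reference, a sketch of the .vimrc workaround, which works because $PATH is an ordinary environment variable inside Vim (the extra directories are placeholders for whatever your .bash_profile adds):

      " prepend the directories your shell profile normally adds
      let $PATH = '/usr/local/bin:/opt/local/bin:' . $PATH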

    Read the article

  • Reverse Proxy Methods for Hosting a Low-Bandwidth Dynamic Website

    - by Casey
    I am building a webcam with an HTTP server that will be running on a low-bandwidth connection. The content on the site will change every 5 to 10 minutes. Instead of serving files directly from this connection, are there hosting companies that can act as a reverse proxy for my site? Then, if nobody is using the site, the local internet connection remains idle; and if I receive 1000 hits all at the same time, only one HTTP GET reaches my connection, and the hosting company (on a fat pipe) serves the other 999 requests. This doesn't sound like a very common usage model, but I feel this would be the optimal solution to my situation.
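
    Any host or small VPS running a caching reverse proxy can do this; as a sketch, an nginx configuration along these lines collapses concurrent requests into a single upstream fetch (hostnames, paths and timings below are placeholders):

      proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cam:10m max_size=1g inactive=30m;
      server {
          listen 80;
          server_name cam.example.com;
          location / {
              proxy_pass            http://home-connection.example.org;
              proxy_cache           cam;
              proxy_cache_valid     200 5m;      # matches the 5-10 minute update cycle
              proxy_cache_lock      on;          # 1000 simultaneous hits -> one upstream GET
              proxy_cache_use_stale updating error timeout;
          }
      }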

    Read the article

  • Setting up 802.1X wireless connection on OSX

    - by hizki
    I am an OS X user; I have Snow Leopard 10.6.5 and an updated AirPort. I am trying to connect to my university's wireless network, but it has a complex security setup that I am having trouble configuring... There are instructions here for connecting with Windows XP, Windows 7 and Linux. Can someone please tell me how to set up this network on my Mac? Thank you. P.S. I have had previous success in setting up this network, but I have no idea what I did that made it work. Since I updated my AirPort it has worked only seldom, and very slowly... Before the update, even when it worked, it never remembered my password.

    Read the article

  • Recover RAID 5 data after created new array instead of re-using

    - by Brigadieren
    Folks, please help - I am a newb with a major headache at hand (a perfect-storm situation). I have three 1 TB HDDs on my Ubuntu 11.04 box configured as software RAID 5. The data had been copied weekly onto another, separate, off-the-computer hard drive until that failed completely and was thrown away. A few days back we had a power outage, and after rebooting my box wouldn't mount the RAID. In my infinite wisdom I entered the mdadm --create -f... command instead of mdadm --assemble, and didn't notice the travesty I had done until after. It started the array degraded and proceeded with building and syncing it, which took ~10 hours. After I was back I saw that the array is successfully up and running, but the RAID is not; I mean the individual drives are partitioned (partition type f8) but the md0 device is not. Realizing in horror what I have done, I am trying to find some solutions. I just pray that --create didn't overwrite the entire content of the drives. Could someone PLEASE help me out with this - the data that's on the drives is very important and unique, ~10 years of photos, docs, etc. Is it possible that specifying the participating hard drives in the wrong order can make mdadm overwrite them? When I do mdadm --examine --scan I get something like:

      ARRAY /dev/md/0 metadata=1.2 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b name=<hostname>:0

    Interestingly enough, the name used to be 'raid' and not the host name with :0 appended. Here are the 'sanitized' config entries:

      DEVICE /dev/sdf1 /dev/sde1 /dev/sdd1
      CREATE owner=root group=disk mode=0660 auto=yes
      HOMEHOST <system>
      MAILADDR root
      ARRAY /dev/md0 metadata=1.2 name=tanserv:0 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b

    Here is the output from mdstat:

      cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid5 sdd1[0] sdf1[3] sde1[1]
            1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      unused devices: <none>

    fdisk shows the following:

      fdisk -l

      Disk /dev/sda: 80.0 GB, 80026361856 bytes
      255 heads, 63 sectors/track, 9729 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000bf62e

      Device Boot      Start      End      Blocks   Id  System
      /dev/sda1   *        1     9443    75846656   83  Linux
      /dev/sda2         9443     9730     2301953    5  Extended
      /dev/sda5         9443     9730     2301952   82  Linux swap / Solaris

      Disk /dev/sdb: 750.2 GB, 750156374016 bytes
      255 heads, 63 sectors/track, 91201 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000de8dd

      Device Boot      Start      End      Blocks   Id  System
      /dev/sdb1            1    91201   732572001   8e  Linux LVM

      Disk /dev/sdc: 500.1 GB, 500107862016 bytes
      255 heads, 63 sectors/track, 60801 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00056a17

      Device Boot      Start      End      Blocks   Id  System
      /dev/sdc1            1    60801   488384001   8e  Linux LVM

      Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000ca948

      Device Boot      Start      End      Blocks   Id  System
      /dev/sdd1            1   121601   976760001   fd  Linux raid autodetect

      Disk /dev/dm-0: 1250.3 GB, 1250254913536 bytes
      255 heads, 63 sectors/track, 152001 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/dm-0 doesn't contain a valid partition table

      Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x93a66687

      Device Boot      Start      End      Blocks   Id  System
      /dev/sde1            1   121601   976760001   fd  Linux raid autodetect

      Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0xe6edc059

      Device Boot      Start      End      Blocks   Id  System
      /dev/sdf1            1   121601   976760001   fd  Linux raid autodetect

      Disk /dev/md0: 2000.4 GB, 2000401989632 bytes
      2 heads, 4 sectors/track, 488379392 cylinders
      Units = cylinders of 8 * 512 = 4096 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
      Disk identifier: 0x00000000

      Disk /dev/md0 doesn't contain a valid partition table

    Per suggestions, I did clean up the superblocks and re-created the array with the --assume-clean option, but with no luck at all. Is there any tool that will help me revive at least some of the data? Can someone tell me what mdadm --create does when it syncs, and how it destroys the data, so I can write a tool to undo whatever was done? After re-creating the RAID I ran fsck.ext4 /dev/md0, and here is the output:

      root@tanserv:/etc/mdadm# fsck.ext4 /dev/md0
      e2fsck 1.41.14 (22-Dec-2010)
      fsck.ext4: Superblock invalid, trying backup blocks...
      fsck.ext4: Bad magic number in super-block while trying to open /dev/md0

      The superblock could not be read or does not describe a correct ext2
      filesystem. If the device is valid and it really contains an ext2
      filesystem (and not swap or ufs or something else), then the superblock
      is corrupt, and you might try running e2fsck with an alternate superblock:
          e2fsck -b 8193

    Per Shane's suggestion I tried:

      root@tanserv:/home/mushegh# mkfs.ext4 -n /dev/md0
      mke2fs 1.41.14 (22-Dec-2010)
      Filesystem label=
      OS type: Linux
      Block size=4096 (log=2)
      Fragment size=4096 (log=2)
      Stride=128 blocks, Stripe width=256 blocks
      122101760 inodes, 488379392 blocks
      24418969 blocks (5.00%) reserved for the super user
      First data block=0
      Maximum filesystem blocks=0
      14905 block groups
      32768 blocks per group, 32768 fragments per group
      8192 inodes per group
      Superblock backups stored on blocks:
          32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
          4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
          102400000, 214990848

    and ran fsck.ext4 with every backup block, but all returned the following:

      root@tanserv:/home/mushegh# fsck.ext4 -b 214990848 /dev/md0
      e2fsck 1.41.14 (22-Dec-2010)
      fsck.ext4: Invalid argument while trying to open /dev/md0

      The superblock could not be read or does not describe a correct ext2
      filesystem. If the device is valid and it really contains an ext2
      filesystem (and not swap or ufs or something else), then the superblock
      is corrupt, and you might try running e2fsck with an alternate superblock:
          e2fsck -b 8193 <device>

    Any suggestions? Regards!
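
    One general note for anyone in the same spot: further experiments (for instance re-trying mdadm --create --assume-clean with different device orders) are best done against copy-on-write overlays of the member partitions, so the real disks cannot be written to again. A rough sketch, with overlay sizes and paths as assumptions:

      # one sparse overlay file + loop device per RAID member
      truncate -s 20G /mnt/scratch/overlay-sdd1
      loop=$(losetup -f --show /mnt/scratch/overlay-sdd1)

      # copy-on-write snapshot: reads come from sdd1, writes land in the overlay
      size=$(blockdev --getsz /dev/sdd1)
      dmsetup create cow-sdd1 --table "0 $size snapshot /dev/sdd1 $loop P 8"

      # repeat for sde1 and sdf1, then experiment only on the cow-* devices;
      # chunk size, metadata version and device order must match the original array
      mdadm --create /dev/md1 --assume-clean --metadata=1.2 --chunk=512 \
            --level=5 --raid-devices=3 \
            /dev/mapper/cow-sdd1 /dev/mapper/cow-sde1 /dev/mapper/cow-sdf1
      fsck.ext4 -n /dev/md1   # read-only check; tear down and retry another order if it fails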

    Read the article

  • What Defines an AD Object as "Inactive"

    - by Malnizzle
    I am going to be using some DSQUERY/DSMOVE scripts to clean up my AD domain. One option is to move inactive objects to an OU that has restrictive GPOs applied to it. Something like:

      DSQUERY computer -inactive 10 | DSMOVE -newparent <distinguished name of target OU>

    My question is: what value defines an object, both user and computer, as "inactive" for a period of time? Is it the last time a computer was logged on to for computer accounts, and for users the last time the account logged on to a computer? But what if, for example, I had a web server that wasn't rebooted or logged into for a couple of months but remained powered on and functioning as normal - would it be defined as "inactive", even though technically it's still serving web pages and so on? Thanks for the help!
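
    For reference, as far as I know dsquery's -inactive switch takes a number of weeks and is evaluated against the lastLogonTimestamp attribute; computer accounts refresh that attribute when the machine account authenticates to the domain, not only on interactive logons, so a powered-on member server normally does not count as inactive. A sketch of checking before moving anything (the target OU DN is a placeholder):

      dsquery computer -inactive 10 -limit 0
      dsquery user -inactive 10 -limit 0
      dsquery computer -inactive 10 | dsmove -newparent "OU=Stale,DC=example,DC=com"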

    Read the article

  • Homebrew on Mac OS X Lion

    - by user975352
    I'm a beginner with Mac OS X Lion (10.7.2); I don't know the Mac well, but I do know Ubuntu. I installed Homebrew on my Mac and ran the command below:

      $ brew install git

    and then:

      $ brew update
      error: Could not resolve host: github.com; nodename nor servname provided, or not known
      while accessing https://github.com/mxcl/homebrew.git/info/refs

      fatal: HTTP request failed
      Error: Failed while executing git pull origin refs/heads/master:refs/remotes/origin/master

    What's happening on my Mac? How can I resolve this? Would you help me?
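
    "Could not resolve host" is a DNS (or proxy) failure rather than a Homebrew problem, so the first things to check are name resolution and any required proxy (the proxy URL below is just a placeholder; the network service may be named differently on your setup):

      ping -c 3 github.com               # does the name resolve at all?
      scutil --dns | head                # which resolvers OS X is actually using
      networksetup -getdnsservers Wi-Fi
      # if you are behind an HTTP proxy, git needs to be told about it:
      git config --global http.proxy http://proxy.example.com:8080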

    Read the article

  • HP DL180 G6 P410 8x SATA 1TB, what is the optimal configuration?

    - by Oneiroi
    I have an HP DL180 G6 with a P410 RAID controller. Presently it runs 4x 1 TB Samsung SpinPoint SATA drives in a RAID 10 configuration using default settings. I am about to add a backplane to increase the drive capacity from 4 to 12 drives, and I plan to install 4 more 1 TB SATA drives. The drives are matched and have close serial numbers (they arrived together on the manufacturer's pallet): model HD103UJ, 1000 GB / 7200 rpm / 32 MB cache, rated for 3 Gb/s. I will also be installing RHEL 6.1 x86_64. My question is: what would be the optimal RAID settings (stripe size etc.) for this configuration? To recap: 8x HD103UJ 1000 GB / 7200 rpm / 32 MB, rated 3 Gb/s, in a RAID 10 configuration. Thanks in advance. Update on the role: the server is to become an iSCSI target for an internal OpenStack deployment currently underway (Glance), and will also provide virtualisation through KVM.
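
    If it helps, the array settings can be created or adjusted from the OS with hpacucli; a sketch of building the 8-drive RAID 1+0 logical drive with an explicit stripe size (the slot number, drive range and the 256 KB stripe are assumptions to check against your own controller and workload):

      hpacucli ctrl slot=0 show config        # confirm the slot number and physical drive IDs first
      hpacucli ctrl slot=0 create type=ld drives=1I:1:1-1I:1:8 raid=1+0 ss=256
      hpacucli ctrl slot=0 modify dwc=enable  # drive write cache; only if you accept the power-loss risk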

    Read the article

  • Xen DomU on DRBD device: barrier errors

    - by Halfgaar
    I'm testing setting up a Xen DomU with DRBD storage for easy failover. Most of the time, immediately after booting the DomU, I get an I/O error:

      [ 3.153370] EXT3-fs (xvda2): using internal journal
      [ 3.277115] ip_tables: (C) 2000-2006 Netfilter Core Team
      [ 3.336014] nf_conntrack version 0.5.0 (3899 buckets, 15596 max)
      [ 3.515604] init: failsafe main process (397) killed by TERM signal
      [ 3.801589] blkfront: barrier: write xvda2 op failed
      [ 3.801597] blkfront: xvda2: barrier or flush: disabled
      [ 3.801611] end_request: I/O error, dev xvda2, sector 52171168
      [ 3.801630] end_request: I/O error, dev xvda2, sector 52171168
      [ 3.801642] Buffer I/O error on device xvda2, logical block 6521396
      [ 3.801652] lost page write due to I/O error on xvda2
      [ 3.801755] Aborting journal on device xvda2.
      [ 3.804415] EXT3-fs (xvda2): error: ext3_journal_start_sb: Detected aborted journal
      [ 3.804434] EXT3-fs (xvda2): error: remounting filesystem read-only
      [ 3.814754] journal commit I/O error
      [ 6.973831] init: udev-fallback-graphics main process (538) terminated with status 1
      [ 6.992267] init: plymouth-splash main process (546) terminated with status 1

    The manpage of drbdsetup says that LVM (which I use) doesn't support barriers (better known as tagged command queuing or native command queuing), so I configured the DRBD device not to use barriers. This can be seen in /proc/drbd (by "wo:f", meaning flush, the next method DRBD chooses after barrier):

      3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
         ns:2160152 nr:520204 dw:2680344 dr:2678107 al:3549 bm:9183 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

    And on the other host:

      3: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----
         ns:0 nr:2160152 dw:2160152 dr:0 al:0 bm:8052 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

    I also enabled the option disable_sendpage, as per the DRBD docs:

      cat /sys/module/drbd/parameters/disable_sendpage
      Y

    I also tried adding barriers=0 to fstab as a mount option. Still it sometimes says:

      [ 58.603896] blkfront: barrier: write xvda2 op failed
      [ 58.603903] blkfront: xvda2: barrier or flush: disabled

    I don't even know if ext3 has a nobarrier option. And, because only one of my storage systems is battery backed, it would not be smart anyway. Why does it still complain about barriers when I disabled them? Both hosts are:

      Debian: 6.0.4
      uname -a: Linux 2.6.32-5-xen-amd64
      drbd: 8.3.7
      Xen: 4.0.1

    Guest: Ubuntu 12.04 LTS, uname -a: Linux 3.2.0-24-generic pvops

    DRBD resource:

      resource drbdvm {
        meta-disk internal;
        device /dev/drbd3;
        startup {
          # The timeout value when the last known state of the other side was available. 0 means infinite.
          wfc-timeout 0;
          # Timeout value when the last known state was disconnected. 0 means infinite.
          degr-wfc-timeout 180;
        }
        syncer {
          # This is recommended only for low-bandwidth lines, to only send those
          # blocks which really have changed.
          #csums-alg md5;
          # Set to about half your net speed
          rate 60M;
          # It seems that this option moved to the 'net' section in drbd 8.4. (later release than Debian has currently)
          verify-alg md5;
        }
        net {
          # The manpage says this is recommended only in pre-production (because of its performance), to determine
          # if your LAN card has a TCP checksum offloading bug.
          #data-integrity-alg md5;
        }
        disk {
          # Detach causes the device to work over-the-network-only after the
          # underlying disk fails. Detach is not default for historical reasons, but is
          # recommended by the docs.
          # However, the Debian defaults in drbd.conf suggest the machine will reboot in that event...
          on-io-error detach;
          # LVM doesn't support barriers, so disabling it. It will revert to flush. Check wo: in /proc/drbd.
          # If you don't disable it, you get IO errors.
          no-disk-barrier;
        }
        on host1 {
          # universe is a VG
          disk /dev/universe/drbdvm-disk;
          address 10.0.0.1:7792;
        }
        on host2 {
          # universe is a VG
          disk /dev/universe/drbdvm-disk;
          address 10.0.0.2:7792;
        }
      }

    DomU cfg:

      bootloader = '/usr/lib/xen-default/bin/pygrub'
      vcpus = '2'
      memory = '512'

      # Disk device(s).
      root = '/dev/xvda2 ro'
      disk = [
        'phy:/dev/drbd3,xvda2,w',
        'phy:/dev/universe/drbdvm-swap,xvda1,w',
      ]

      # Hostname
      name = 'drbdvm'

      # Networking
      # fake IP for posting
      vif = [ 'ip=1.2.3.4,mac=00:16:3E:22:A8:A7' ]

      # Behaviour
      on_poweroff = 'destroy'
      on_reboot = 'restart'
      on_crash = 'restart'

    In my test setup, the primary host's storage is a 9650SE SATA-II RAID PCIe card with battery; the secondary is software RAID 1. Isn't DRBD+Xen widely used? With these problems, it's not going to work.
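
    For what it's worth, ext3 spells the mount option barrier=0/barrier=1 (nobarrier is the XFS spelling), so a hedged DomU fstab would look like the lines below. Whether that silences the message is a separate question, since "blkfront: barrier: write xvda2 op failed" is logged by the paravirtual block driver probing the backend, not by ext3 itself:

      # /etc/fstab inside the DomU - barrier=0 disables journal write barriers on ext3
      /dev/xvda2  /     ext3  defaults,errors=remount-ro,barrier=0  0  1
      /dev/xvda1  none  swap  sw                                    0  0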

    Read the article

  • postgres memory allocation tuning 2

    - by pstanton
    I've got an Ubuntu Linux system with 12 GB of memory, most of which (at least 10 GB) can be allocated solely to Postgres. The system also has a 6-disk 15k SCSI RAID 10 setup. The process I'm trying to optimise is twofold. First, a single-threaded, single-connection process will do many inserts into 2-4 tables linked by foreign keys. Second, many different complex queries are run against the resulting data, using GROUP BY extensively; this part especially needs to be optimised. I have four of these processes running at once in order to make use of the quad-core CPU, so there will generally be no more than 5 concurrent connections (1 spare for admin tasks). What configuration changes to the default Postgres config would you recommend? I'm looking for the optimum values for things like work_mem, shared_buffers etc. Relevant doco. Thanks!
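
    As a starting point only, a sketch of the settings usually tuned first for a 12 GB box with a handful of heavyweight connections (every number below is an assumption to benchmark against your own workload, not a recommendation):

      # postgresql.conf - rough starting values for 12 GB RAM, ~5 connections
      shared_buffers = 2GB              # often ~25% of RAM; very large values can hurt
      effective_cache_size = 8GB        # what the OS page cache is expected to hold
      work_mem = 256MB                  # per sort/hash per query; safe only with few connections
      maintenance_work_mem = 1GB        # index builds, VACUUM
      checkpoint_segments = 32          # fewer, larger checkpoints for the bulk-insert phase
      checkpoint_completion_target = 0.9
      wal_buffers = 16MB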

    Read the article

  • Why is Apple System Image Utility so slow?

    - by Jon Rhoades
    I'm using Apple System Image Utility (SIU) on Snow Leopard 10.6.2, and I am rather disturbed that it takes over three hours to make a NetRestore or NetBoot image. I'm using a brand new iMac as the donor machine and another brand new iMac as the imaging machine, connected using target disk mode and FireWire 800. The hard drive size, and the subsequent image, is about 8 GB. Restoring the image over the network takes about 4 minutes. Given that Norton Ghost will take an image in about 5 minutes (or less on newer machines) over USB 2, why is the Mac over an order of magnitude slower?

    Read the article

  • Macbook cannot see specific wireless network after being connected to it for a while

    - by donut
    Okay, so there's a single wireless network that my laptop has trouble with. My MacBook Pro used to be fine with it until it changed to using channel 13 (or 11?). Since then, after being connected to it for a while, the network disappears from my laptop's view. Other networks show up fine, and other computers (including several Macs) have no trouble connecting to this network. If I clear my system caches using Onyx and then restart (sometimes a couple of times), my laptop can see and connect to it again, but it seems that if I disconnect and try reconnecting I have to clear the caches again. One thing to note: if I put my computer to sleep while connected to this network, it has no problem reconnecting on wake-up. I've got a 15" MacBook Pro 2,2 with Leopard 10.5.8.

    Read the article

  • PDF rendering issue on OSX

    - by 2Ti
    I came across some very odd rendering when trying to view a PDF file that I needed to print out, and I was wondering if anyone has come across a similar problem before or has any ideas as to what might be causing it. The PDF is garbled when viewed on OS X 10.7.4 with Preview 6.0; I've tried opening the file in Skim, but that doesn't work either. The PDF appears as it should when Chrome renders it in the browser, but not if I download it onto my machine. Illustrator complains about "an unknown imaging construct" when I open the file, but renders it fine nevertheless; Photoshop doesn't have any problems either.
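
    One hedged workaround when Preview trips over an unusual construct is to re-distill the file through Ghostscript and print the rewritten copy (filenames here are placeholders):

      gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=fixed.pdf broken.pdf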

    Read the article

  • A minimal Linux distribution for my ASUS EEE PC

    - by Andrioid
    I recently bought an ASUS EEE 1000HE and I intend to use it for note-taking and light browsing at the university. The machine has a 10" screen, so the interface needs to be very compact. I've already tried: EEEbuntu - very nice driver support and out-of-the-box experience, but I feel it is too slow to boot and the general experience is too heavy in my opinion. Moblin 2 - looks very cool and boots just fine, but is way too unstable to use; I also find it annoying that I can't find hotkey documentation anywhere. Any netbook OS recommendation is welcome (although those specific to my model would be great). There is an entire jungle of distributions out there, so if you've been on a safari, please share your experience.

    Read the article
